46 research outputs found

    A Review of Islamic Law on the Practice of Buying and Selling Hair under a Wholesale System in Haircut Services

    Get PDF
    Buying and selling under a wholesale system is a common commercial practice, and haircut service providers are among the business actors who engage in it. Given how widespread this practice is, a clear review of Islamic law is needed as a basis for it. Opinions from various schools of thought on wholesale transactions and on the sale of prohibited materials can serve as the legal basis for the wholesale hair trading carried out by haircut services.

    Badīʿ in Surah Al-Muddathir: An Analytical Rhetorical Study

    Get PDF
    Research abstract: The Holy Qur'an is a great miracle for Muslims in this world; it is the inimitable speech of Allah in its legislation, its sciences, and its language, and a source of rulings in creed and conduct. It is no wonder that some scholars held that the Qur'an is inimitable precisely by virtue of its eloquence. The science of rhetoric (balāgha) is divided into three branches: maʿānī (semantics), bayān (figurative expression), and badīʿ (embellishment). In this study the researcher chose the science of badīʿ, for its detailed exposition and its uncovering of subtle and refined expression, and chose Surah Al-Muddathir because it contains many types of badīʿ. The research problem is: which verses in Surah Al-Muddathir contain verbal embellishments (al-muḥassināt al-lafẓiyya)? The objective, accordingly, is to identify the verses in Surah Al-Muddathir that contain these verbal embellishments. The researcher limited the study to certain verbal types of badīʿ. The method used is descriptive and qualitative: the researcher analyzed the subject word by word, and the findings are presented through interpretation and explanation rather than numerically. The sources of this research are Surah Al-Muddathir in the Holy Qur'an, books of tafsīr, books of rhetoric, and related works. The approach was qualitative research, with data collected through library research and analyzed using content analysis. The analysis proceeded in the following order: a preliminary exploratory study of the rhetorical analysis, analysis of the body of the research and use of its various data, and a general interpretation of the results. In detail, the researcher read the data sources to understand their structures and meanings word by word, especially in reading Surah Al-Muddathir. The findings are that the verses in Surah Al-Muddathir containing verbal embellishments are: (1) verses containing jinās (paronomasia): five verses; (2) verses containing izdiwāj (parallelism): one verse; (3) verses containing sajʿ (rhymed prose): fifty-three verses; (4) verses containing muwāzana (balance): two verses; (5) verses containing tarṣīʿ (internal rhyme): three verses. Thus the verbal embellishments found in Surah Al-Muddathir are jinās, izdiwāj, sajʿ, muwāzana, and tarṣīʿ.

    CommuNety: deep learning-based face recognition system for the prediction of cohesive communities

    Get PDF
    Effective mining of social media, which involves a large number of users, is a challenging task. Traditional approaches rely on the analysis of text data related to users to accomplish this task. However, text data lacks significant information about social users and their associated groups. In this paper, we propose CommuNety, a deep learning system for the prediction of cohesive networks using face images from photo albums. The proposed deep learning model consists of a hierarchical CNN architecture that learns descriptive features related to each cohesive network. The paper also proposes a novel Face Co-occurrence Frequency algorithm to quantify the presence of people in images, and a novel photo-ranking method to analyze the strength of relationships between individuals in a predicted social network. We extensively evaluate the proposed technique on the PIPA dataset and compare it with state-of-the-art methods. Our experimental results demonstrate the superior performance of the proposed technique in predicting relationships between individuals and the cohesiveness of communities.
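    The abstract does not detail how the Face Co-occurrence Frequency algorithm works; as an illustrative sketch only, one natural reading is to count, for each pair of people, how often their faces appear in the same photo. The `cooccurrence_frequencies` helper and the `album` data below are hypothetical, not taken from the paper.

    ```python
    from collections import Counter
    from itertools import combinations

    def cooccurrence_frequencies(photos):
        """Count how often each pair of people appears together in one photo.

        `photos` is a list of sets of person identifiers detected per image.
        Returns a Counter mapping sorted (person_a, person_b) pairs to counts.
        """
        counts = Counter()
        for people in photos:
            # every unordered pair of people present in this photo co-occurs once
            for pair in combinations(sorted(people), 2):
                counts[pair] += 1
        return counts

    album = [{"alice", "bob"}, {"alice", "bob", "carol"}, {"bob", "carol"}]
    freq = cooccurrence_frequencies(album)
    # ("alice", "bob") co-occur in two of the three photos
    ```

    Pairs with high counts would then be candidate members of the same cohesive network.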

    LodgeNet: an automated framework for precise detection and classification of wheat lodging severity levels in precision farming

    Get PDF
    Wheat lodging is a serious problem affecting grain yield, plant health, and grain quality. Addressing the lodging issue in wheat is a desirable task in breeding programs. Precise detection of lodging levels during wheat screening can aid in selecting lines with resistance to lodging. Traditional approaches to phenotyping lodging rely on manual data collection from field plots, which is slow and laborious and can introduce errors and bias. This paper presents a framework called 'LodgeNet' that facilitates wheat lodging detection. Using Unmanned Aerial Vehicles (UAVs) and Deep Learning (DL), LodgeNet improves on traditional methods of detecting lodging in both precision and efficiency. Using a dataset of 2000 multi-spectral images of wheat plots, we have developed a novel image registration technique that aligns the different bands of multi-spectral images. This approach allows the creation of comprehensive RGB images, enhancing the detection and classification of wheat lodging. We have employed advanced image enhancement techniques to improve image quality, highlighting the features important for wheat lodging detection. We combined three color enhancement transformations into two presets for image refinement. The first preset, 'Haze & Gamma Adjustment,' minimizes atmospheric haze and adjusts the gamma, while the second, 'Stretching Contrast Limits,' extends the contrast of the RGB image by calculating and applying the upper and lower limits of each band. LodgeNet, which relies on the state-of-the-art YOLOv8 deep learning algorithm, can detect and classify wheat lodging severity levels ranging from no lodging (Class 1) to severe lodging (Class 9). The results show a mean Average Precision (mAP) of 0.952 at an IoU threshold of 0.5 and 0.641 over the 0.50-0.95 range in classifying wheat lodging severity levels. LodgeNet promises an efficient and automated high-throughput solution for real-time crop monitoring of wheat lodging severity levels in the field.
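    The 'Stretching Contrast Limits' preset is described as computing and applying upper and lower limits per band. The exact limits used by LodgeNet are not given; a common variant, shown here as a minimal sketch, takes per-band percentiles as the limits and rescales linearly to the full 8-bit range. The percentile choices (2 and 98) are assumptions for illustration.

    ```python
    import numpy as np

    def stretch_contrast(image, lower_pct=2, upper_pct=98):
        """Linearly stretch each band of an (H, W, C) image to [0, 255].

        The lower/upper limits are per-band percentiles, so a few outlier
        pixels do not dominate the stretch; values outside the limits clip.
        """
        out = np.empty_like(image, dtype=np.float64)
        for band in range(image.shape[2]):
            lo = np.percentile(image[..., band], lower_pct)
            hi = np.percentile(image[..., band], upper_pct)
            scaled = (image[..., band] - lo) / max(hi - lo, 1e-9)
            out[..., band] = np.clip(scaled, 0.0, 1.0) * 255
        return out.astype(np.uint8)
    ```

    Applied to a low-contrast band, this maps the chosen limits to 0 and 255 and spreads the intermediate values across the full range.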

    Color Based Skin Classification

    Get PDF
    Skin detection is used in applications ranging from face detection, body-part tracking, and hand gesture analysis to the retrieval and blocking of objectionable content. In this paper, we investigate and evaluate (1) the effect of color space transformation on skin detection performance, to find the most appropriate color space for skin detection, (2) the role of the illuminance component of a color space, (3) the most appropriate pixel-based skin color modeling technique, and (4) the effect of color constancy algorithms on color-based skin classification. This comprehensive evaluation of color spaces and skin color models will help in selecting the best combinations for skin detection. Nine skin modeling approaches (AdaBoost, Bayesian network, J48, Multilayer Perceptron, Naive Bayesian, Random Forest, RBF network, SVM, and the histogram approach of Jones and Rehg [15]) in six color spaces (IHLS, HSI, RGB, normalized RGB, YCbCr, and CIELAB), with and without the illuminance component, are compared and evaluated. Moreover, the impact of five color constancy algorithms on skin detection is reported. Results on a database of 8991 images with manually annotated pixel-level ground truth show that (1) the cylindrical color spaces outperform the other color spaces, (2) removing the illuminance component decreases performance, (3) the choice of skin color modeling approach matters, with tree-based classifiers (Random Forest, J48) well suited to pixel-based skin detection; the best combination, Random Forest with a cylindrical color space and the illuminance component retained, outperforms all others, and (4) color constancy algorithms can improve skin detection performance.
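    To make "pixel-based skin color modeling" concrete: the simplest baseline (not one of the nine classifiers evaluated in the paper) converts a pixel to YCbCr and tests the chrominance channels against a fixed box. The sketch below uses the ITU-R BT.601 full-range conversion and the commonly cited Cb in [77, 127], Cr in [133, 173] thresholds; the threshold box is an assumption for illustration, not the paper's model.

    ```python
    def rgb_to_ycbcr(r, g, b):
        """Convert an 8-bit RGB pixel to YCbCr (ITU-R BT.601, full range)."""
        y  = 0.299 * r + 0.587 * g + 0.114 * b
        cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
        cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
        return y, cb, cr

    def is_skin(r, g, b):
        """Classify a single pixel as skin using a fixed Cb/Cr threshold box."""
        _, cb, cr = rgb_to_ycbcr(r, g, b)
        # ignore luminance (Y); decide on chrominance alone
        return 77 <= cb <= 127 and 133 <= cr <= 173
    ```

    The learned models in the paper (Random Forest, SVM, etc.) replace this fixed box with a decision boundary fitted to labeled pixels, but operate on the same per-pixel color inputs.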

    Real-time mobile robot self-localization: a stereo vision based approach

    No full text
    The main focus of this thesis is vision-based real-time self-localization of tiny autonomous mobile robots in a known but highly dynamic environment. The problem ranges from tracking the position given an initial estimate to global self-localization. The localization algorithm does not depend on the presence of artificial landmarks or special structures in the environment, nor does it require that features lie on or close to the ground plane. The algorithm enables the robot to find its initial position and to verify its location during every movement. Global position estimation in unmodified environments normally involves measuring distances to, or angles between, distinct features (natural landmarks) from the robot position, or matching a local map constructed from sensor readings to a global map of the environment. In contrast to other localization algorithms, stereo-vision-based depth computation is used for self-localization. A localization framework is in progress that uses trilateration-based techniques whenever distinct landmark features are extracted. The trilateration-based method is complemented by a sparse 3D map of the local environment, constructed from sensor data and matched against the environment model. The stereo vision system is mounted on a pivoted head as an aid in feature exploration. Distance measurements are used because they require fewer landmarks than methods based on angle measurements. Visual features are extracted using the Gradient Based Hough Transform (GBHT), which provides the strongest groupings of collinear pixels having roughly the same edge orientation. Global self-localization is computationally slow and sometimes impossible if not enough features are visible. Therefore, once the robot position is computed, it is tracked with local sensors. This is fast and reasonably accurate, as the accumulating error is suppressed after short intervals. An Extended Kalman Filter is used to fuse information from multiple heterogeneous sensors. Keeping a rough estimate of the robot position helps in feature extraction and matching with the global map. Significant performance improvements have been achieved with a new hybrid method that combines global position estimation with tracking. Simulation results for robot environment modeling, feature extraction, depth computation, information fusion, and an initial test of the framework are reported. As such, a marked reduction of the number of landmarks needed for vision-based self-localization of robots has been achieved.
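    The trilateration step described above can be sketched in 2D: given distances to three known landmarks, subtracting the first circle equation from the other two yields a linear 2x2 system in the robot position. This is a minimal textbook sketch, not the thesis's implementation, which works with stereo-derived 3D measurements.

    ```python
    def trilaterate(landmarks, distances):
        """Estimate a 2D position from distances to three known landmarks.

        Linearizes the three circle equations (x-xi)^2 + (y-yi)^2 = di^2
        by subtracting the first from the other two, then solves the
        resulting 2x2 linear system with Cramer's rule.
        """
        (x1, y1), (x2, y2), (x3, y3) = landmarks
        d1, d2, d3 = distances
        a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
        c1 = (x2**2 - x1**2) + (y2**2 - y1**2) - (d2**2 - d1**2)
        a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
        c2 = (x3**2 - x1**2) + (y3**2 - y1**2) - (d3**2 - d1**2)
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            raise ValueError("landmarks are collinear; position is not unique")
        return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det
    ```

    This also illustrates why distance-based localization needs fewer landmarks than a fully nonlinear fit: three range measurements to non-collinear landmarks already pin down a unique 2D position.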

    Visible Spectrum and Infra-Red Image Matching: A New Method

    No full text
    Textural and intensity changes between Visible Spectrum (VS) and Infra-Red (IR) images degrade the performance of feature points. We propose a new method based on a regression technique to overcome this problem. The proposed method consists of three main steps. In the first step, feature points are detected in the VS-IR images and Modified Normalized (MN)-Scale Invariant Feature Transform (SIFT) descriptors are computed. In the second step, correct MN-SIFT descriptor matches between the VS-IR images are identified using the projection error, and a regression model is trained on the correct MN-SIFT descriptors. In the third step, the regression model is used to process the MN-SIFT descriptors of test VS images, removing the misalignment with the MN-SIFT descriptors of the test IR images and overcoming the textural and intensity changes. Experiments are performed on two different VS-IR image datasets. The experimental results show that the proposed method performs well, achieving on average 14% better precision and 15% better matching scores than the recently proposed Histograms of Directional Maps (HoDM) descriptor.
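    The abstract does not specify the form of the regression model; as a minimal sketch under that assumption, a linear map with bias can be fitted by least squares so that transformed VS descriptors land closer to their IR counterparts. The function names and the synthetic-data usage are illustrative, not from the paper.

    ```python
    import numpy as np

    def fit_descriptor_mapping(vs_desc, ir_desc):
        """Fit a linear map W (with bias) so that [vs_desc, 1] @ W ~ ir_desc.

        vs_desc, ir_desc: (n_matches, dim) arrays of correctly matched
        descriptor pairs; solved as an ordinary least-squares problem.
        """
        X = np.hstack([vs_desc, np.ones((vs_desc.shape[0], 1))])  # bias column
        W, *_ = np.linalg.lstsq(X, ir_desc, rcond=None)
        return W

    def apply_mapping(vs_desc, W):
        """Transform VS descriptors into the IR descriptor space."""
        X = np.hstack([vs_desc, np.ones((vs_desc.shape[0], 1))])
        return X @ W
    ```

    At test time only `apply_mapping` is needed: VS descriptors are transformed first, and nearest-neighbor matching against the IR descriptors proceeds as usual.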

    Evaluation of model generalization for growing plants using conditional learning

    No full text
    This paper addresses the lack of generalization of existing semantic segmentation models in the crop and weed segmentation domain. We compare two training mechanisms, classical and adversarial, to understand which scheme works best for a particular encoder-decoder model. We use U-Net, SegNet, and DeepLabv3+ with a ResNet-50 backbone as segmentation networks. The models are trained with cross-entropy loss for classical training and PatchGAN loss for adversarial training. Adopting the Conditional Generative Adversarial Network (CGAN) hierarchical setting, we penalize the different Generators (G) using a PatchGAN Discriminator (D) and an L1 loss to generate the segmentation output. Generalization here means exhibiting fewer failures and performing comparably on growing plants with different data distributions. We use images from four different growth stages of sugar beet, splitting the data so that the fully grown stage is used for training while the earlier stages are reserved entirely for testing the model. We conclude that U-Net trained in the adversarial setting is more robust to changes in the dataset: the adversarially trained U-Net shows a 10% overall improvement, with mIoU scores of 0.34, 0.55, 0.75, and 0.85 for the four growth stages.
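    For reference, the mIoU metric quoted above is the intersection-over-union averaged over classes. A minimal sketch of the standard computation (averaging over classes whose union is non-empty; the exact averaging convention used in the paper is an assumption):

    ```python
    import numpy as np

    def mean_iou(pred, target, num_classes):
        """Mean Intersection-over-Union over classes with a non-empty union.

        pred, target: integer label arrays of the same shape.
        """
        ious = []
        for c in range(num_classes):
            inter = np.logical_and(pred == c, target == c).sum()
            union = np.logical_or(pred == c, target == c).sum()
            if union > 0:  # skip classes absent from both maps
                ious.append(inter / union)
        return float(np.mean(ious))
    ```

    A perfect segmentation scores 1.0; scores like 0.34 on the earliest growth stage reflect how far the test distribution has drifted from the full-grown training data.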